
PolicyPad: Collaborative Prototyping of LLM Policies

Feng, K. J. Kevin, Kuo, Tzu-Sheng, Chen, Quan Ze, Cheong, Inyoung, Holstein, Kenneth, Zhang, Amy X.

arXiv.org Artificial Intelligence

As LLMs gain adoption in high-stakes domains like mental health, domain experts are increasingly consulted to provide input into policies governing their behavior. From an observation of 19 policymaking workshops with 9 experts over 15 weeks, we identified opportunities to better support rapid experimentation, feedback, and iteration for collaborative policy design processes. We present PolicyPad, an interactive system that facilitates the emerging practice of LLM policy prototyping by drawing from established UX prototyping practices, including heuristic evaluation and storyboarding. Using PolicyPad, policy designers can collaborate on drafting a policy in real time while independently testing policy-informed model behavior with usage scenarios. We evaluate PolicyPad through workshops with 8 groups of 22 domain experts in mental health and law, finding that PolicyPad enhanced collaborative dynamics during policy design, enabled tight feedback loops, and led to novel policy contributions. Overall, our work paves participatory paths for advancing AI alignment and safety.


Hong Kong preparing policy statement for artificial intelligence in finance

The Japan Times

The Hong Kong government is preparing to issue its maiden policy statement on the use of artificial intelligence in finance, according to people familiar with the matter, in a move that could catalyze the use of the technology in areas from trading to investment banking and cryptocurrencies. The city's Financial Services and Treasury Bureau plans to issue a framework of guidelines to touch on the ethical use of AI and general principles for applying the technology in the finance world, the people said, asking not to be identified discussing private information. Officials are still drafting the document while getting feedback from the industry, the people said. Details are still subject to change in the coming weeks, they added. While specifics remain unclear, the document is broadly intended to signal Hong Kong's support for AI, as governments around the world get to grips with the technology's potential.


Bias in LLMs as Annotators: The Effect of Party Cues on Labelling Decision by Large Language Models

Vallejo Vera, Sebastián, Driggers, Hunter

arXiv.org Artificial Intelligence

The increasing sophistication of large language models (LLMs) has allowed for their more prominent presence in political science research. One particular area gathering significant attention in the field is the use of LLMs as annotators. Research has shown promising results, with LLMs often outperforming human coders (Gilardi, Alizadeh and Kubli, 2023) and providing comparable accuracy when labelling political text across multiple languages (Heseltine and Clemm von Hohenberg, 2024). While researchers have evaluated the performance of LLMs as annotators across different domains, there is still little information on how the known biases of LLMs (see Gallegos et al., 2024) can affect their performance. For human annotators, studies show that political cues, such as party, have an effect on their coding decisions (Laver and Garry, 2000; Benoit et al., 2016; Ennser-Jedenastik and Meyer, 2018).
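The party-cue effect the abstract describes can be probed by labelling the same statement twice, with and without a party cue in the prompt, and counting label flips. A minimal sketch, assuming a generic `ask_llm` chat-completion callable and the labels 'left'/'right'; all names here are illustrative, not the authors' actual protocol:

```python
# Hypothetical probe for party-cue effects on an LLM annotator.
# `ask_llm` is a placeholder for any chat-completion call that returns
# a single label string; the prompt wording and labels are assumptions.

def build_prompt(statement, party=None):
    """Compose an annotation prompt, optionally injecting a party cue."""
    cue = f" The statement was made by a {party} politician." if party else ""
    return (f"Label the following statement as 'left' or 'right'.{cue}\n"
            f"Statement: {statement}\nLabel:")

def cue_flip_rate(statements, party, ask_llm):
    """Share of statements whose label changes when the party cue is added."""
    flips = 0
    for s in statements:
        baseline = ask_llm(build_prompt(s))          # no cue
        cued = ask_llm(build_prompt(s, party=party)) # with party cue
        flips += baseline != cued
    return flips / len(statements)
```

A flip rate well above zero on politically neutral statements would indicate the annotator is conditioning on the cue rather than the text, which is the bias pattern the paper investigates.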


Embracing the Generative AI Revolution: Advancing Tertiary Education in Cybersecurity with GPT

Nowrozy, Raza, Jam, David

arXiv.org Artificial Intelligence

The rapid advancement of generative Artificial Intelligence (AI) technologies, particularly Generative Pre-trained Transformer (GPT) models such as ChatGPT, has the potential to significantly impact cybersecurity. In this study, we investigated the impact of GPTs, specifically ChatGPT, on tertiary education in cybersecurity, and provided recommendations for universities to adapt their curricula to meet the evolving needs of the industry. Our research highlighted the importance of understanding the alignment between GPT's "mental model" and human cognition, as well as how GPT capabilities map onto human skills in Bloom's taxonomy. By analyzing current educational practices and the alignment of curricula with industry requirements, we concluded that universities providing practical degrees like cybersecurity should align closely with industry demand and embrace the inevitable generative AI revolution, while applying stringent ethics oversight to safeguard responsible GPT usage. We proposed a set of recommendations focused on updating university curricula, promoting agility within universities, fostering collaboration between academia, industry, and policymakers, and evaluating and assessing educational outcomes.


UK lays out regulatory model for Artificial Intelligence

#artificialintelligence

The UK is setting the stage for its future Artificial Intelligence (AI) regulatory model. Much like the EU, it suggests adopting a risk-based approach but will differ from the bloc by entrusting enforcement to a panel of regulators. The British government presented its "pro-innovation approach to regulating AI" on Monday (18 July) alongside its new Data Protection and Digital Information Bill. It follows the presentation of the National AI Strategy last September, a ten-year plan to ensure the UK becomes a global AI superpower. The country has invested more than £2.3 billion (€2.7 billion) in AI since 2014.


The Obligatory Artificial Intelligence Year End Article, 2020 Edition

#artificialintelligence

At the end of 2019, my similar column mentioned retail uses of computer vision and predicted that chatbots would become more integrated into other applications during 2020. That happened: chatbots have now become a “must have” in most customer interaction interfaces.


Malicious Code Execution Detection and Response Immune System inspired by the Danger Theory

Kim, Jungwon, Greensmith, Julie, Twycross, Jamie, Aickelin, Uwe

arXiv.org Artificial Intelligence

The analysis of system calls is one method employed by anomaly detection systems to recognise malicious code execution. Similarities can be drawn between this process and the behaviour of certain cells of the human immune system, and these similarities can be exploited to construct an artificial immune system. A recently developed hypothesis in immunology, the Danger Theory, states that our immune system responds to the presence of intruders through sensing molecules belonging to those invaders, plus signals generated by the host indicating danger and damage. We propose the incorporation of this concept into a responsive intrusion detection system, where behavioural information of the system and running processes is combined with information regarding individual system calls.
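The two-signal idea the abstract describes can be sketched as a detector that responds only when an anomalous system-call pattern (the "antigen") co-occurs with host-level danger signals. This is a minimal illustration, not the authors' system; the threshold, n-gram profiling, and signal names are all assumptions:

```python
# Sketch of a Danger Theory-style detector: an alert requires BOTH an
# anomalous system-call trace and host-generated danger signals.

DANGER_THRESHOLD = 0.5  # assumed cutoff for both signal types

def syscall_anomaly_score(trace, normal_ngrams, n=3):
    """Fraction of syscall n-grams in a trace absent from the normal profile."""
    grams = [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]
    if not grams:
        return 0.0
    unseen = sum(1 for g in grams if g not in normal_ngrams)
    return unseen / len(grams)

def danger_signal(host):
    """Average host-level damage indicators (names are illustrative)."""
    signals = [host.get("cpu_spike", 0.0),
               host.get("file_corruption", 0.0),
               host.get("unexpected_exits", 0.0)]
    return sum(signals) / len(signals)

def respond(trace, normal_ngrams, host):
    """Trigger a response only when antigen and danger signals co-occur."""
    antigen = syscall_anomaly_score(trace, normal_ngrams)
    return antigen > DANGER_THRESHOLD and danger_signal(host) > DANGER_THRESHOLD
```

Requiring both signals is what distinguishes this scheme from classical self/nonself anomaly detection: an unusual trace on an otherwise healthy host raises no response, reducing false positives.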